Face Generation

In this project, you'll use generative adversarial networks (GANs) to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first project using GANs, we want you to test your neural network on MNIST before moving on to CelebA. Running the GANs on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [12]:
#data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change how many examples are displayed by changing show_n_images.

In [13]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[13]:
<matplotlib.image.AxesImage at 0x7f7ca9986eb8>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change how many examples are displayed by changing show_n_images.

In [14]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Out[14]:
<matplotlib.image.AxesImage at 0x7f7ca9972748>

Preprocess the Data

Since the project's main focus is on building the GANs, we'll preprocess the data for you. The MNIST and CelebA images will be scaled to pixel values in the range of -0.5 to 0.5 and sized at 28x28. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are grayscale images with a single color channel, while the CelebA images have 3 color channels (RGB).
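
The heavy lifting happens in helper.py, but as a rough sketch, the scaling step amounts to something like the following (normalize_batch is an illustrative name, not the helper's actual API):

import numpy as np

def normalize_batch(images):
    # Map uint8 pixel values from [0, 255] into the range [-0.5, 0.5]
    return images.astype(np.float32) / 255.0 - 0.5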

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [15]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [16]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    input_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    input_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')

    lr = tf.placeholder(tf.float32, name='learning_rate')

    return input_real, input_z, lr


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [17]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param image: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
    alpha = 0.1  # leaky ReLU slope
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 28x28ximage_channels
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='valid')
        relu1 = tf.maximum(alpha * x1, x1)
        # 12x12x64
        x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
        bn2 = tf.layers.batch_normalization(x2, training=True)
        relu2 = tf.maximum(alpha * bn2, bn2)
        # 6x6x128
        x3 = tf.layers.conv2d(relu2, 256, 5, strides=1, padding='same')
        bn3 = tf.layers.batch_normalization(x3, training=True)
        relu3 = tf.maximum(alpha * bn3, bn3)
        # 6x6x256
        # Flatten it
        flat = tf.reshape(relu3, (-1, 6*6*256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed
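
The activation used throughout the discriminator (and the generator below), tf.maximum(alpha * x, x), is a leaky ReLU: positive values pass through unchanged, while negative values are scaled by alpha so a small gradient survives. A quick NumPy sketch of the same function, purely illustrative:

import numpy as np

def leaky_relu(x, alpha=0.1):
    # Identity for x >= 0, slope alpha for x < 0
    return np.maximum(alpha * x, x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.2  0.   3. ]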

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [18]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
    alpha = 0.1  # leaky ReLU slope
    with tf.variable_scope('generator', reuse=not is_train):
        # First fully connected layer
        x1 = tf.layers.dense(z, 7*7*512)
        # Reshape it to start the convolutional stack
        x1 = tf.reshape(x1, (-1, 7, 7,512))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        x1 = tf.maximum(alpha * x1, x1)
        # 7x7x512
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        x2 = tf.maximum(alpha * x2, x2)
        # 14x14x256
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=is_train)
        x3 = tf.maximum(alpha * x3, x3)
        # 28x28x128
        x4 = tf.layers.conv2d(x3, 64, 5, strides=1, padding='same')
        x4 = tf.layers.batch_normalization(x4, training=is_train)
        x4 = tf.maximum(alpha * x4, x4)
        # 28x28x64
        # Output layer        
        logits = tf.layers.conv2d(x4, out_channel_dim, 5, strides=1, padding='same')
        # 28x28xout_channel_dim
                
        return tf.tanh(logits)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)

In [19]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    smooth = 0.1  # one-sided label smoothing: real labels become 0.9 instead of 1.0
    g_out = generator(input_z, out_channel_dim=out_channel_dim)
    d_out, d_real_logits = discriminator(input_real)
    d_z, d_z_logits = discriminator(g_out, True)
    d_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real_logits, labels=tf.ones_like(d_out)*(1-smooth)))
    d_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_z_logits, labels=tf.zeros_like(d_z)))
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_z_logits, labels=tf.ones_like(d_z)))

    d_loss = d_real_loss + d_fake_loss                                                   

    return d_loss, g_loss
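
Note the one-sided label smoothing: real labels are set to 1 - smooth = 0.9 rather than 1.0, which keeps the discriminator from becoming overconfident on real images. To see the effect, here is a NumPy re-derivation using the formula from the tf.nn.sigmoid_cross_entropy_with_logits documentation (the function below is a sketch, not project code):

import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # max(x, 0) - x*z + log(1 + exp(-|x|)), as documented for
    # tf.nn.sigmoid_cross_entropy_with_logits
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

print(sigmoid_cross_entropy(np.array([4.0]), np.array([1.0])))  # ~0.018
print(sigmoid_cross_entropy(np.array([4.0]), np.array([0.9])))  # ~0.418: a confident logit still pays a cost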

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables. Filter the variables by the "discriminator" and "generator" scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [20]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
    train_vars = tf.trainable_variables()
    d_vars = [v for v in train_vars if v.name.startswith('discriminator')]
    g_vars = [v for v in train_vars if v.name.startswith('generator')]
    # Run batch normalization's update ops (moving averages) before each training step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(loss=d_loss, var_list=d_vars)
        g_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(loss=g_loss, var_list=g_vars)    
    return d_opt, g_opt

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [21]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()
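
Note that show_generator_output calls generator with is_train=False. Because the generator's variable scope was built with reuse=not is_train, this second call reuses the trained weights rather than creating new ones, and batch normalization runs in inference mode using its moving statistics.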

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to display the generator's output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [22]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model
    out_channel_dim = data_shape[3]
    input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2], out_channel_dim, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, out_channel_dim)
    d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
    i_show = 0  # batch counter across epochs, used for logging cadence
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # TODO: Train Model
                # get_batches yields values in [-0.5, 0.5]; scale to [-1, 1] to match the generator's tanh output
                batch_images = batch_images * 2
                z_images = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                feed_data = {input_real: batch_images, input_z: z_images, lr:learning_rate}
                # Run the generator optimizer twice per discriminator step so the generator keeps pace
                sess.run(g_opt, feed_dict=feed_data)
                sess.run(g_opt, feed_dict=feed_data)
                sess.run(d_opt, feed_dict=feed_data)
                
                if i_show % 20 == 0:
                    dis_loss = sess.run(d_loss, feed_dict=feed_data)
                    gen_loss = sess.run(g_loss, feed_dict=feed_data)
                    print('Epoch %s/%s, Step %s, dis_loss: %s, gen_loss %s' % (epoch_i+1, epoch_count, i_show, dis_loss, gen_loss))
                if i_show % 100 == 0:
                    show_generator_output(sess, 64, input_z, out_channel_dim, data_image_mode)
                i_show += 1
                

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the generator's loss is lower than the discriminator's loss or close to 0.
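
As a sanity check on the numbers: if the discriminator is perfectly confused and outputs 0.5 for every image (logit 0), each sigmoid cross-entropy term equals ln 2 ≈ 0.693, so d_loss (real term plus fake term) settles near 2 ln 2 ≈ 1.386 and g_loss near 0.693. The training logs below hover around exactly those values, which indicates a roughly balanced GAN rather than a failure mode.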

CelebA

Run your GAN on CelebA. One epoch takes around 20 minutes on an average GPU. You can run the whole epoch or stop when it starts to generate realistic faces.

In [ ]:
batch_size = 32
z_dim = 100
l_rate = 0.0005
beta1 = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, l_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
Epoch 1/2, Step 0, dis_loss: 0.803479, gen_loss 13.2762
Epoch 1/2, Step 20, dis_loss: 1.31256, gen_loss 6.91854
Epoch 1/2, Step 40, dis_loss: 1.43839, gen_loss 1.27243
Epoch 1/2, Step 60, dis_loss: 1.29724, gen_loss 1.13096
Epoch 1/2, Step 80, dis_loss: 1.28141, gen_loss 1.39317
Epoch 1/2, Step 100, dis_loss: 1.38328, gen_loss 0.814962
Epoch 1/2, Step 120, dis_loss: 1.48626, gen_loss 1.38906
Epoch 1/2, Step 140, dis_loss: 1.34538, gen_loss 1.12159
Epoch 1/2, Step 160, dis_loss: 1.37692, gen_loss 1.12727
Epoch 1/2, Step 180, dis_loss: 1.33915, gen_loss 0.649994
Epoch 1/2, Step 200, dis_loss: 1.40832, gen_loss 1.2793
Epoch 1/2, Step 220, dis_loss: 1.38601, gen_loss 0.642969
Epoch 1/2, Step 240, dis_loss: 1.46501, gen_loss 1.45018
Epoch 1/2, Step 260, dis_loss: 1.42432, gen_loss 1.25191
Epoch 1/2, Step 280, dis_loss: 1.37139, gen_loss 0.740887
Epoch 1/2, Step 300, dis_loss: 1.40441, gen_loss 0.718435
Epoch 1/2, Step 320, dis_loss: 1.36375, gen_loss 0.908165
Epoch 1/2, Step 340, dis_loss: 1.34491, gen_loss 0.937514
Epoch 1/2, Step 360, dis_loss: 1.30785, gen_loss 0.860194
Epoch 1/2, Step 380, dis_loss: 1.34319, gen_loss 0.814188
Epoch 1/2, Step 400, dis_loss: 1.36803, gen_loss 1.09927
Epoch 1/2, Step 420, dis_loss: 1.32269, gen_loss 0.830118
Epoch 1/2, Step 440, dis_loss: 1.33387, gen_loss 0.763626
Epoch 1/2, Step 460, dis_loss: 1.29774, gen_loss 0.95837
Epoch 1/2, Step 480, dis_loss: 1.3554, gen_loss 0.85543
Epoch 1/2, Step 500, dis_loss: 1.39643, gen_loss 1.121
Epoch 1/2, Step 520, dis_loss: 1.37812, gen_loss 0.590018
Epoch 1/2, Step 540, dis_loss: 1.39585, gen_loss 0.625553
Epoch 1/2, Step 560, dis_loss: 1.41083, gen_loss 0.57221
Epoch 1/2, Step 580, dis_loss: 1.39633, gen_loss 0.692837
Epoch 1/2, Step 600, dis_loss: 1.41374, gen_loss 1.05659
Epoch 1/2, Step 620, dis_loss: 1.36597, gen_loss 0.742637
Epoch 1/2, Step 640, dis_loss: 1.35806, gen_loss 0.718436
Epoch 1/2, Step 660, dis_loss: 1.33565, gen_loss 0.782622
Epoch 1/2, Step 680, dis_loss: 1.45952, gen_loss 0.521412
Epoch 1/2, Step 700, dis_loss: 1.36353, gen_loss 0.723258
Epoch 1/2, Step 720, dis_loss: 1.38773, gen_loss 0.974549
Epoch 1/2, Step 740, dis_loss: 1.44608, gen_loss 0.563421
Epoch 1/2, Step 760, dis_loss: 1.33411, gen_loss 0.794094
Epoch 1/2, Step 780, dis_loss: 1.38203, gen_loss 1.07564
Epoch 1/2, Step 800, dis_loss: 1.38582, gen_loss 1.10757
Epoch 1/2, Step 820, dis_loss: 1.41781, gen_loss 1.03999
Epoch 1/2, Step 840, dis_loss: 1.36237, gen_loss 0.859633
Epoch 1/2, Step 860, dis_loss: 1.35035, gen_loss 0.680876
Epoch 1/2, Step 880, dis_loss: 1.43366, gen_loss 0.549975
Epoch 1/2, Step 900, dis_loss: 1.42284, gen_loss 1.20461
Epoch 1/2, Step 920, dis_loss: 1.39743, gen_loss 0.676019
Epoch 1/2, Step 940, dis_loss: 1.36414, gen_loss 0.678818
Epoch 1/2, Step 960, dis_loss: 1.35707, gen_loss 0.831934
Epoch 1/2, Step 980, dis_loss: 1.32745, gen_loss 0.839262
Epoch 1/2, Step 1000, dis_loss: 1.34035, gen_loss 1.00204
Epoch 1/2, Step 1020, dis_loss: 1.4065, gen_loss 0.980394
Epoch 1/2, Step 1040, dis_loss: 1.42469, gen_loss 1.08028
Epoch 1/2, Step 1060, dis_loss: 1.35592, gen_loss 0.676189
Epoch 1/2, Step 1080, dis_loss: 1.36594, gen_loss 0.717816
Epoch 1/2, Step 1100, dis_loss: 1.37135, gen_loss 0.76512
Epoch 1/2, Step 1120, dis_loss: 1.36741, gen_loss 1.0282
Epoch 1/2, Step 1140, dis_loss: 1.33974, gen_loss 0.890373
Epoch 1/2, Step 1160, dis_loss: 1.38292, gen_loss 1.06123
Epoch 1/2, Step 1180, dis_loss: 1.3631, gen_loss 1.05714
Epoch 1/2, Step 1200, dis_loss: 1.43232, gen_loss 1.101
Epoch 1/2, Step 1220, dis_loss: 1.35458, gen_loss 0.868588
Epoch 1/2, Step 1240, dis_loss: 1.36688, gen_loss 0.7439
Epoch 1/2, Step 1260, dis_loss: 1.41466, gen_loss 0.578246
Epoch 1/2, Step 1280, dis_loss: 1.37318, gen_loss 0.687885
Epoch 1/2, Step 1300, dis_loss: 1.38387, gen_loss 0.604549
Epoch 1/2, Step 1320, dis_loss: 1.34597, gen_loss 0.788213
Epoch 1/2, Step 1340, dis_loss: 1.45165, gen_loss 0.551592
Epoch 1/2, Step 1360, dis_loss: 1.34595, gen_loss 0.748387
Epoch 1/2, Step 1380, dis_loss: 1.3794, gen_loss 0.72826
Epoch 1/2, Step 1400, dis_loss: 1.38342, gen_loss 0.985844
Epoch 1/2, Step 1420, dis_loss: 1.45897, gen_loss 1.23185
Epoch 1/2, Step 1440, dis_loss: 1.36032, gen_loss 0.664073
Epoch 1/2, Step 1460, dis_loss: 1.38939, gen_loss 0.721103
Epoch 1/2, Step 1480, dis_loss: 1.40183, gen_loss 0.638056
Epoch 1/2, Step 1500, dis_loss: 1.38423, gen_loss 0.921034
Epoch 1/2, Step 1520, dis_loss: 1.36667, gen_loss 0.927173
Epoch 1/2, Step 1540, dis_loss: 1.42526, gen_loss 1.10868
Epoch 1/2, Step 1560, dis_loss: 1.40428, gen_loss 0.994344
Epoch 1/2, Step 1580, dis_loss: 1.40051, gen_loss 0.598101
Epoch 1/2, Step 1600, dis_loss: 1.38502, gen_loss 0.755477
Epoch 1/2, Step 1620, dis_loss: 1.39639, gen_loss 0.652384
Epoch 1/2, Step 1640, dis_loss: 1.37724, gen_loss 0.920059
Epoch 1/2, Step 1660, dis_loss: 1.40137, gen_loss 0.594563
Epoch 1/2, Step 1680, dis_loss: 1.42271, gen_loss 0.585432
Epoch 1/2, Step 1700, dis_loss: 1.37895, gen_loss 0.643579
Epoch 1/2, Step 1720, dis_loss: 1.39833, gen_loss 0.930826
Epoch 1/2, Step 1740, dis_loss: 1.40104, gen_loss 0.938722
Epoch 1/2, Step 1760, dis_loss: 1.3762, gen_loss 0.76983
Epoch 1/2, Step 1780, dis_loss: 1.42312, gen_loss 0.59312
Epoch 1/2, Step 1800, dis_loss: 1.3738, gen_loss 0.914448
Epoch 1/2, Step 1820, dis_loss: 1.37711, gen_loss 0.936639
Epoch 1/2, Step 1840, dis_loss: 1.421, gen_loss 0.577647
Epoch 1/2, Step 1860, dis_loss: 1.36163, gen_loss 0.859544
Epoch 2/2, Step 1880, dis_loss: 1.36863, gen_loss 0.826841
Epoch 2/2, Step 1900, dis_loss: 1.38809, gen_loss 0.977807
Epoch 2/2, Step 1920, dis_loss: 1.39334, gen_loss 0.633859
Epoch 2/2, Step 1940, dis_loss: 1.39158, gen_loss 0.621416
Epoch 2/2, Step 1960, dis_loss: 1.37576, gen_loss 0.734568
Epoch 2/2, Step 1980, dis_loss: 1.3712, gen_loss 0.710285
Epoch 2/2, Step 2000, dis_loss: 1.38646, gen_loss 0.627679
Epoch 2/2, Step 2020, dis_loss: 1.35954, gen_loss 0.745372
Epoch 2/2, Step 2040, dis_loss: 1.36584, gen_loss 0.808891
Epoch 2/2, Step 2060, dis_loss: 1.40842, gen_loss 1.08465
Epoch 2/2, Step 2080, dis_loss: 1.37103, gen_loss 0.8241
Epoch 2/2, Step 2100, dis_loss: 1.42103, gen_loss 1.02771
Epoch 2/2, Step 2120, dis_loss: 1.36904, gen_loss 0.749086
Epoch 2/2, Step 2140, dis_loss: 1.37181, gen_loss 0.940874
Epoch 2/2, Step 2160, dis_loss: 1.37105, gen_loss 0.949864
Epoch 2/2, Step 2180, dis_loss: 1.4177, gen_loss 1.12841
Epoch 2/2, Step 2200, dis_loss: 1.37485, gen_loss 0.821938
Epoch 2/2, Step 2220, dis_loss: 1.37201, gen_loss 0.90798
Epoch 2/2, Step 2240, dis_loss: 1.37489, gen_loss 1.00739
Epoch 2/2, Step 2260, dis_loss: 1.39786, gen_loss 1.0183
Epoch 2/2, Step 2280, dis_loss: 1.38408, gen_loss 0.692619
Epoch 2/2, Step 2300, dis_loss: 1.3781, gen_loss 0.780862
Epoch 2/2, Step 2320, dis_loss: 1.37006, gen_loss 0.745667
Epoch 2/2, Step 2340, dis_loss: 1.42293, gen_loss 0.586998
Epoch 2/2, Step 2360, dis_loss: 1.42509, gen_loss 1.1334
Epoch 2/2, Step 2380, dis_loss: 1.40824, gen_loss 1.01861
Epoch 2/2, Step 2400, dis_loss: 1.38547, gen_loss 1.0142
Epoch 2/2, Step 2420, dis_loss: 1.37762, gen_loss 0.850553
Epoch 2/2, Step 2440, dis_loss: 1.37703, gen_loss 0.976974
Epoch 2/2, Step 2460, dis_loss: 1.4341, gen_loss 1.12776
Epoch 2/2, Step 2480, dis_loss: 1.37307, gen_loss 0.786014
Epoch 2/2, Step 2500, dis_loss: 1.34764, gen_loss 0.763274
Epoch 2/2, Step 2520, dis_loss: 1.39414, gen_loss 0.649053
Epoch 2/2, Step 2540, dis_loss: 1.3996, gen_loss 0.624494
Epoch 2/2, Step 2560, dis_loss: 1.36783, gen_loss 0.755083
Epoch 2/2, Step 2580, dis_loss: 1.39821, gen_loss 0.654156
Epoch 2/2, Step 2600, dis_loss: 1.39116, gen_loss 0.654445
Epoch 2/2, Step 2620, dis_loss: 1.4001, gen_loss 1.00843
Epoch 2/2, Step 2640, dis_loss: 1.38052, gen_loss 0.994136
Epoch 2/2, Step 2660, dis_loss: 1.38346, gen_loss 0.938507
Epoch 2/2, Step 2680, dis_loss: 1.37342, gen_loss 0.856344
Epoch 2/2, Step 2700, dis_loss: 1.39161, gen_loss 0.983509
Epoch 2/2, Step 2720, dis_loss: 1.44269, gen_loss 1.18273
Epoch 2/2, Step 2740, dis_loss: 1.38983, gen_loss 0.916097
Epoch 2/2, Step 2760, dis_loss: 1.38002, gen_loss 0.925842
Epoch 2/2, Step 2780, dis_loss: 1.39071, gen_loss 0.67481
Epoch 2/2, Step 2800, dis_loss: 1.40277, gen_loss 1.03046
Epoch 2/2, Step 2820, dis_loss: 1.38822, gen_loss 1.02506
Epoch 2/2, Step 2840, dis_loss: 1.38821, gen_loss 0.994976
Epoch 2/2, Step 2860, dis_loss: 1.38315, gen_loss 0.693542
Epoch 2/2, Step 2880, dis_loss: 1.36769, gen_loss 0.749656
Epoch 2/2, Step 2900, dis_loss: 1.40621, gen_loss 0.634951
Epoch 2/2, Step 2920, dis_loss: 1.39843, gen_loss 0.626083
Epoch 2/2, Step 2940, dis_loss: 1.37555, gen_loss 0.697042
Epoch 2/2, Step 2960, dis_loss: 1.37956, gen_loss 0.726964
Epoch 2/2, Step 2980, dis_loss: 1.37601, gen_loss 0.705517
Epoch 2/2, Step 3000, dis_loss: 1.39211, gen_loss 0.697305
Epoch 2/2, Step 3020, dis_loss: 1.40025, gen_loss 0.636819
Epoch 2/2, Step 3040, dis_loss: 1.38671, gen_loss 0.682997
Epoch 2/2, Step 3060, dis_loss: 1.37619, gen_loss 0.73702
Epoch 2/2, Step 3080, dis_loss: 1.38486, gen_loss 0.66476
Epoch 2/2, Step 3100, dis_loss: 1.36435, gen_loss 0.805069
Epoch 2/2, Step 3120, dis_loss: 1.44245, gen_loss 1.17465
Epoch 2/2, Step 3140, dis_loss: 1.37112, gen_loss 0.866424
Epoch 2/2, Step 3160, dis_loss: 1.37394, gen_loss 0.842498
Epoch 2/2, Step 3180, dis_loss: 1.40375, gen_loss 1.02666
Epoch 2/2, Step 3200, dis_loss: 1.37964, gen_loss 0.921467
Epoch 2/2, Step 3220, dis_loss: 1.40778, gen_loss 1.03082
Epoch 2/2, Step 3240, dis_loss: 1.40779, gen_loss 0.986939
Epoch 2/2, Step 3260, dis_loss: 1.37203, gen_loss 0.846915
Epoch 2/2, Step 3280, dis_loss: 1.37256, gen_loss 0.913017
Epoch 2/2, Step 3300, dis_loss: 1.36184, gen_loss 0.831734
Epoch 2/2, Step 3320, dis_loss: 1.39551, gen_loss 1.03743
Epoch 2/2, Step 3340, dis_loss: 1.40287, gen_loss 1.01607
Epoch 2/2, Step 3360, dis_loss: 1.39139, gen_loss 0.98382
Epoch 2/2, Step 3380, dis_loss: 1.3768, gen_loss 0.76483
Epoch 2/2, Step 3400, dis_loss: 1.37219, gen_loss 0.853549
Epoch 2/2, Step 3420, dis_loss: 1.41907, gen_loss 1.07517
Epoch 2/2, Step 3440, dis_loss: 1.37496, gen_loss 0.926069
Epoch 2/2, Step 3460, dis_loss: 1.38748, gen_loss 0.884222
Epoch 2/2, Step 3480, dis_loss: 1.38424, gen_loss 0.914044
Epoch 2/2, Step 3500, dis_loss: 1.40251, gen_loss 1.01875
Epoch 2/2, Step 3520, dis_loss: 1.38195, gen_loss 0.912884
Epoch 2/2, Step 3540, dis_loss: 1.39098, gen_loss 0.948551
Epoch 2/2, Step 3560, dis_loss: 1.38093, gen_loss 0.919304
Epoch 2/2, Step 3580, dis_loss: 1.40795, gen_loss 1.0884
Epoch 2/2, Step 3600, dis_loss: 1.37597, gen_loss 0.874777
Epoch 2/2, Step 3620, dis_loss: 1.38989, gen_loss 0.960279
Epoch 2/2, Step 3640, dis_loss: 1.38425, gen_loss 0.928568
Epoch 2/2, Step 3660, dis_loss: 1.38667, gen_loss 0.936746
Epoch 2/2, Step 3680, dis_loss: 1.38073, gen_loss 0.886302
Epoch 2/2, Step 3700, dis_loss: 1.39381, gen_loss 0.977235
Epoch 2/2, Step 3720, dis_loss: 1.39083, gen_loss 0.998912
Epoch 2/2, Step 3740, dis_loss: 1.38843, gen_loss 0.936581
In [ ]:
batch_size = 32
z_dim = 200
l_rate = 0.0005
beta1 = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, l_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
Epoch 1/1, Step 0, dis_loss: 0.510961, gen_loss 9.5482
Epoch 1/1, Step 20, dis_loss: 1.44386, gen_loss 1.15226
Epoch 1/1, Step 40, dis_loss: 1.54431, gen_loss 2.00105
Epoch 1/1, Step 60, dis_loss: 1.43795, gen_loss 1.29646
Epoch 1/1, Step 80, dis_loss: 1.37138, gen_loss 1.06847
Epoch 1/1, Step 100, dis_loss: 1.28943, gen_loss 1.0432
Epoch 1/1, Step 120, dis_loss: 1.36601, gen_loss 1.0367
Epoch 1/1, Step 140, dis_loss: 1.3319, gen_loss 0.794015
Epoch 1/1, Step 160, dis_loss: 1.39614, gen_loss 0.993125
Epoch 1/1, Step 180, dis_loss: 1.37769, gen_loss 0.898291
Epoch 1/1, Step 200, dis_loss: 1.38748, gen_loss 0.979001
Epoch 1/1, Step 220, dis_loss: 1.34139, gen_loss 0.806651
Epoch 1/1, Step 240, dis_loss: 1.37368, gen_loss 0.780913
Epoch 1/1, Step 260, dis_loss: 1.41361, gen_loss 0.699067
Epoch 1/1, Step 280, dis_loss: 1.40562, gen_loss 0.743626
Epoch 1/1, Step 300, dis_loss: 1.38633, gen_loss 0.817687
Epoch 1/1, Step 320, dis_loss: 1.40027, gen_loss 0.722994
Epoch 1/1, Step 340, dis_loss: 1.38872, gen_loss 0.892276
Epoch 1/1, Step 360, dis_loss: 1.35825, gen_loss 0.850097
Epoch 1/1, Step 380, dis_loss: 1.39004, gen_loss 0.784219
Epoch 1/1, Step 400, dis_loss: 1.3695, gen_loss 0.963697
Epoch 1/1, Step 420, dis_loss: 1.39182, gen_loss 0.860339
Epoch 1/1, Step 440, dis_loss: 1.40902, gen_loss 0.88338
Epoch 1/1, Step 460, dis_loss: 1.31536, gen_loss 0.81257
Epoch 1/1, Step 480, dis_loss: 1.36234, gen_loss 0.826028
Epoch 1/1, Step 500, dis_loss: 1.3649, gen_loss 0.785331
Epoch 1/1, Step 520, dis_loss: 1.3817, gen_loss 0.947048
Epoch 1/1, Step 540, dis_loss: 1.39616, gen_loss 0.827243
Epoch 1/1, Step 560, dis_loss: 1.39491, gen_loss 0.910237
Epoch 1/1, Step 580, dis_loss: 1.33512, gen_loss 0.909117
Epoch 1/1, Step 600, dis_loss: 1.4107, gen_loss 0.760819
Epoch 1/1, Step 620, dis_loss: 1.37215, gen_loss 0.752075
Epoch 1/1, Step 640, dis_loss: 1.36197, gen_loss 1.0175
Epoch 1/1, Step 660, dis_loss: 1.36422, gen_loss 0.847082
Epoch 1/1, Step 680, dis_loss: 1.38224, gen_loss 0.846382
Epoch 1/1, Step 700, dis_loss: 1.38814, gen_loss 0.735891
Epoch 1/1, Step 720, dis_loss: 1.3646, gen_loss 0.891305
Epoch 1/1, Step 740, dis_loss: 1.37327, gen_loss 0.955055
Epoch 1/1, Step 760, dis_loss: 1.40433, gen_loss 0.737149
Epoch 1/1, Step 780, dis_loss: 1.3652, gen_loss 0.933847
Epoch 1/1, Step 800, dis_loss: 1.36054, gen_loss 0.879053
Epoch 1/1, Step 820, dis_loss: 1.38393, gen_loss 0.748566
Epoch 1/1, Step 840, dis_loss: 1.35179, gen_loss 0.821221
Epoch 1/1, Step 860, dis_loss: 1.36598, gen_loss 1.21846
Epoch 1/1, Step 880, dis_loss: 1.4043, gen_loss 0.66958
Epoch 1/1, Step 900, dis_loss: 1.34669, gen_loss 0.917714
Epoch 1/1, Step 920, dis_loss: 1.37883, gen_loss 0.957281
Epoch 1/1, Step 940, dis_loss: 1.35455, gen_loss 0.782395
Epoch 1/1, Step 960, dis_loss: 1.39723, gen_loss 0.934831
Epoch 1/1, Step 980, dis_loss: 1.39091, gen_loss 0.828087
Epoch 1/1, Step 1000, dis_loss: 1.36847, gen_loss 0.717024
Epoch 1/1, Step 1020, dis_loss: 1.36965, gen_loss 0.812519
Epoch 1/1, Step 1040, dis_loss: 1.38163, gen_loss 0.708245
Epoch 1/1, Step 1060, dis_loss: 1.37613, gen_loss 0.92014
Epoch 1/1, Step 1080, dis_loss: 1.38439, gen_loss 0.884775
Epoch 1/1, Step 1100, dis_loss: 1.40647, gen_loss 1.04592
Epoch 1/1, Step 1120, dis_loss: 1.37206, gen_loss 0.782258
Epoch 1/1, Step 1140, dis_loss: 1.40292, gen_loss 0.902734
Epoch 1/1, Step 1160, dis_loss: 1.36453, gen_loss 0.707713
Epoch 1/1, Step 1180, dis_loss: 1.36611, gen_loss 0.936665
Epoch 1/1, Step 1200, dis_loss: 1.36969, gen_loss 0.93349
Epoch 1/1, Step 1220, dis_loss: 1.38059, gen_loss 0.774684
Epoch 1/1, Step 1240, dis_loss: 1.39392, gen_loss 0.711258
Epoch 1/1, Step 1260, dis_loss: 1.35384, gen_loss 0.867473
Epoch 1/1, Step 1280, dis_loss: 1.37802, gen_loss 0.811175
Epoch 1/1, Step 1300, dis_loss: 1.37469, gen_loss 0.873108
Epoch 1/1, Step 1320, dis_loss: 1.40887, gen_loss 1.02322
Epoch 1/1, Step 1340, dis_loss: 1.3898, gen_loss 0.924171
Epoch 1/1, Step 1360, dis_loss: 1.37587, gen_loss 0.885555
Epoch 1/1, Step 1380, dis_loss: 1.37967, gen_loss 0.932679
Epoch 1/1, Step 1400, dis_loss: 1.38943, gen_loss 0.812159
Epoch 1/1, Step 1420, dis_loss: 1.34706, gen_loss 0.829275
Epoch 1/1, Step 1440, dis_loss: 1.3694, gen_loss 0.795278
Epoch 1/1, Step 1460, dis_loss: 1.39365, gen_loss 0.762586
Epoch 1/1, Step 1480, dis_loss: 1.36578, gen_loss 0.720172
Epoch 1/1, Step 1500, dis_loss: 1.37119, gen_loss 0.853146
Epoch 1/1, Step 1520, dis_loss: 1.34509, gen_loss 0.928706
Epoch 1/1, Step 1540, dis_loss: 1.37118, gen_loss 0.820603
Epoch 1/1, Step 1560, dis_loss: 1.37734, gen_loss 0.861409
Epoch 1/1, Step 1580, dis_loss: 1.36718, gen_loss 0.909315
Epoch 1/1, Step 1600, dis_loss: 1.37772, gen_loss 0.875726
Epoch 1/1, Step 1620, dis_loss: 1.3695, gen_loss 0.843848
Epoch 1/1, Step 1640, dis_loss: 1.3692, gen_loss 0.709986
Epoch 1/1, Step 1660, dis_loss: 1.39553, gen_loss 0.867032
Epoch 1/1, Step 1680, dis_loss: 1.37139, gen_loss 0.969221
Epoch 1/1, Step 1700, dis_loss: 1.38707, gen_loss 0.990235
Epoch 1/1, Step 1720, dis_loss: 1.40393, gen_loss 1.07841
Epoch 1/1, Step 1740, dis_loss: 1.36931, gen_loss 0.790071
Epoch 1/1, Step 1760, dis_loss: 1.39385, gen_loss 1.04094
Epoch 1/1, Step 1780, dis_loss: 1.36705, gen_loss 0.870991
Epoch 1/1, Step 1800, dis_loss: 1.38109, gen_loss 0.893768
Epoch 1/1, Step 1820, dis_loss: 1.32788, gen_loss 0.718625
Epoch 1/1, Step 1840, dis_loss: 1.40213, gen_loss 1.03803
Epoch 1/1, Step 1860, dis_loss: 1.37724, gen_loss 0.763587
Epoch 1/1, Step 1880, dis_loss: 1.37125, gen_loss 0.863222
Epoch 1/1, Step 1900, dis_loss: 1.38811, gen_loss 0.727847
Epoch 1/1, Step 1920, dis_loss: 1.37218, gen_loss 0.742589
Epoch 1/1, Step 1940, dis_loss: 1.37106, gen_loss 0.734665
Epoch 1/1, Step 1960, dis_loss: 1.34571, gen_loss 0.829866
Epoch 1/1, Step 1980, dis_loss: 1.38235, gen_loss 0.892603
Epoch 1/1, Step 2000, dis_loss: 1.39891, gen_loss 1.15477
Epoch 1/1, Step 2020, dis_loss: 1.37887, gen_loss 0.737789
Epoch 1/1, Step 2040, dis_loss: 1.37915, gen_loss 0.914601
Epoch 1/1, Step 2060, dis_loss: 1.38092, gen_loss 0.696437
Epoch 1/1, Step 2080, dis_loss: 1.37085, gen_loss 0.784539
Epoch 1/1, Step 2100, dis_loss: 1.37818, gen_loss 0.771046
Epoch 1/1, Step 2120, dis_loss: 1.37251, gen_loss 0.781492
Epoch 1/1, Step 2140, dis_loss: 1.36485, gen_loss 0.762259
Epoch 1/1, Step 2160, dis_loss: 1.37902, gen_loss 0.734614
Epoch 1/1, Step 2180, dis_loss: 1.3771, gen_loss 0.824158
Epoch 1/1, Step 2200, dis_loss: 1.38686, gen_loss 0.884132
Epoch 1/1, Step 2220, dis_loss: 1.37708, gen_loss 0.907445
Epoch 1/1, Step 2240, dis_loss: 1.36288, gen_loss 0.747469
Epoch 1/1, Step 2260, dis_loss: 1.38481, gen_loss 0.882333
Epoch 1/1, Step 2280, dis_loss: 1.391, gen_loss 0.741914
Epoch 1/1, Step 2300, dis_loss: 1.35453, gen_loss 0.651561
Epoch 1/1, Step 2320, dis_loss: 1.37637, gen_loss 0.826798
Epoch 1/1, Step 2340, dis_loss: 1.36588, gen_loss 0.738191
Epoch 1/1, Step 2360, dis_loss: 1.3831, gen_loss 0.960121
Epoch 1/1, Step 2380, dis_loss: 1.36038, gen_loss 0.699854
Epoch 1/1, Step 2400, dis_loss: 1.37709, gen_loss 0.833362
Epoch 1/1, Step 2420, dis_loss: 1.36577, gen_loss 0.81693
Epoch 1/1, Step 2440, dis_loss: 1.37874, gen_loss 0.770609
Epoch 1/1, Step 2460, dis_loss: 1.38617, gen_loss 0.648793
Epoch 1/1, Step 2480, dis_loss: 1.37821, gen_loss 0.812924
Epoch 1/1, Step 2500, dis_loss: 1.36382, gen_loss 0.914566
Epoch 1/1, Step 2520, dis_loss: 1.35936, gen_loss 0.963206
Epoch 1/1, Step 2540, dis_loss: 1.34915, gen_loss 0.871829
Epoch 1/1, Step 2560, dis_loss: 1.38725, gen_loss 0.880519
Epoch 1/1, Step 2580, dis_loss: 1.38437, gen_loss 0.996796
Epoch 1/1, Step 2600, dis_loss: 1.37103, gen_loss 0.812384
Epoch 1/1, Step 2620, dis_loss: 1.36077, gen_loss 0.87673
Epoch 1/1, Step 2640, dis_loss: 1.37725, gen_loss 0.847986
Epoch 1/1, Step 2660, dis_loss: 1.39252, gen_loss 0.722542
Epoch 1/1, Step 2680, dis_loss: 1.38495, gen_loss 0.820716
Epoch 1/1, Step 2700, dis_loss: 1.38794, gen_loss 1.01862
Epoch 1/1, Step 2720, dis_loss: 1.38167, gen_loss 0.801429
Epoch 1/1, Step 2740, dis_loss: 1.38417, gen_loss 0.779749
Epoch 1/1, Step 2760, dis_loss: 1.37954, gen_loss 0.783471
Epoch 1/1, Step 2780, dis_loss: 1.39115, gen_loss 0.654929
Epoch 1/1, Step 2800, dis_loss: 1.38224, gen_loss 0.852092
Epoch 1/1, Step 2820, dis_loss: 1.3787, gen_loss 0.804005
Epoch 1/1, Step 2840, dis_loss: 1.36365, gen_loss 0.728578
Epoch 1/1, Step 2860, dis_loss: 1.37229, gen_loss 0.715674
Epoch 1/1, Step 2880, dis_loss: 1.36442, gen_loss 0.755321
Epoch 1/1, Step 2900, dis_loss: 1.36527, gen_loss 1.04336
Epoch 1/1, Step 2920, dis_loss: 1.36499, gen_loss 0.883142
Epoch 1/1, Step 2940, dis_loss: 1.32185, gen_loss 0.865154
Epoch 1/1, Step 2960, dis_loss: 1.3791, gen_loss 0.699731
Epoch 1/1, Step 2980, dis_loss: 1.34056, gen_loss 0.905421
Epoch 1/1, Step 3000, dis_loss: 1.37246, gen_loss 0.733068
Epoch 1/1, Step 3020, dis_loss: 1.37287, gen_loss 0.70567
Epoch 1/1, Step 3040, dis_loss: 1.37049, gen_loss 0.873043
Epoch 1/1, Step 3060, dis_loss: 1.37867, gen_loss 0.799395
Epoch 1/1, Step 3080, dis_loss: 1.37185, gen_loss 0.822178
Epoch 1/1, Step 3100, dis_loss: 1.37721, gen_loss 0.891483
Epoch 1/1, Step 3120, dis_loss: 1.38279, gen_loss 0.749153
Epoch 1/1, Step 3140, dis_loss: 1.37877, gen_loss 0.874483
Epoch 1/1, Step 3160, dis_loss: 1.37907, gen_loss 0.74119
Epoch 1/1, Step 3180, dis_loss: 1.37206, gen_loss 0.80988
Epoch 1/1, Step 3200, dis_loss: 1.38282, gen_loss 0.811274
Epoch 1/1, Step 3220, dis_loss: 1.34611, gen_loss 0.868135
Epoch 1/1, Step 3240, dis_loss: 1.37117, gen_loss 0.743653
Epoch 1/1, Step 3260, dis_loss: 1.37243, gen_loss 0.896824
Epoch 1/1, Step 3280, dis_loss: 1.36969, gen_loss 0.781349
Epoch 1/1, Step 3300, dis_loss: 1.37519, gen_loss 0.832265
Epoch 1/1, Step 3320, dis_loss: 1.35601, gen_loss 0.781212
Epoch 1/1, Step 3340, dis_loss: 1.36054, gen_loss 0.870402
Epoch 1/1, Step 3360, dis_loss: 1.36519, gen_loss 0.731521
Epoch 1/1, Step 3380, dis_loss: 1.36213, gen_loss 0.83411
Epoch 1/1, Step 3400, dis_loss: 1.36672, gen_loss 0.828114
Epoch 1/1, Step 3420, dis_loss: 1.38835, gen_loss 0.986881
Epoch 1/1, Step 3440, dis_loss: 1.36711, gen_loss 0.746989
Epoch 1/1, Step 3460, dis_loss: 1.36293, gen_loss 0.795467
Epoch 1/1, Step 3480, dis_loss: 1.38934, gen_loss 0.639105
Epoch 1/1, Step 3500, dis_loss: 1.36099, gen_loss 0.875544
Epoch 1/1, Step 3520, dis_loss: 1.37242, gen_loss 0.888936
Epoch 1/1, Step 3540, dis_loss: 1.38418, gen_loss 0.74635
Epoch 1/1, Step 3560, dis_loss: 1.37628, gen_loss 0.785668
Epoch 1/1, Step 3580, dis_loss: 1.37226, gen_loss 0.865346
Epoch 1/1, Step 3600, dis_loss: 1.38349, gen_loss 0.745543
Epoch 1/1, Step 3620, dis_loss: 1.37345, gen_loss 0.9158
Epoch 1/1, Step 3640, dis_loss: 1.3807, gen_loss 0.871564
Epoch 1/1, Step 3660, dis_loss: 1.38496, gen_loss 0.879462
Epoch 1/1, Step 3680, dis_loss: 1.36721, gen_loss 0.775365
Epoch 1/1, Step 3700, dis_loss: 1.35954, gen_loss 0.790207
Epoch 1/1, Step 3720, dis_loss: 1.37987, gen_loss 0.792847
Epoch 1/1, Step 3740, dis_loss: 1.35707, gen_loss 0.864568
Epoch 1/1, Step 3760, dis_loss: 1.36394, gen_loss 0.833929
Epoch 1/1, Step 3780, dis_loss: 1.36441, gen_loss 0.891932
Epoch 1/1, Step 3800, dis_loss: 1.34527, gen_loss 0.932266
Epoch 1/1, Step 3820, dis_loss: 1.35881, gen_loss 0.774891
Epoch 1/1, Step 3840, dis_loss: 1.37509, gen_loss 0.948394
Epoch 1/1, Step 3860, dis_loss: 1.3562, gen_loss 0.978765
Epoch 1/1, Step 3880, dis_loss: 1.36894, gen_loss 0.803447
Epoch 1/1, Step 3900, dis_loss: 1.36155, gen_loss 0.770908
Epoch 1/1, Step 3920, dis_loss: 1.38395, gen_loss 0.896278
Epoch 1/1, Step 3940, dis_loss: 1.35707, gen_loss 0.883262
Epoch 1/1, Step 3960, dis_loss: 1.37496, gen_loss 0.920126
Epoch 1/1, Step 3980, dis_loss: 1.36859, gen_loss 0.779236
Epoch 1/1, Step 4000, dis_loss: 1.36513, gen_loss 0.850331
Epoch 1/1, Step 4020, dis_loss: 1.35927, gen_loss 0.768816
Epoch 1/1, Step 4040, dis_loss: 1.38283, gen_loss 0.831263
Epoch 1/1, Step 4060, dis_loss: 1.38393, gen_loss 0.828298
Epoch 1/1, Step 4080, dis_loss: 1.3623, gen_loss 0.75767
Epoch 1/1, Step 4100, dis_loss: 1.36463, gen_loss 0.891455
Epoch 1/1, Step 4120, dis_loss: 1.35102, gen_loss 0.831293
Epoch 1/1, Step 4140, dis_loss: 1.37245, gen_loss 0.816676
Epoch 1/1, Step 4160, dis_loss: 1.3751, gen_loss 0.757703
Epoch 1/1, Step 4180, dis_loss: 1.37354, gen_loss 0.872885
Epoch 1/1, Step 4200, dis_loss: 1.36917, gen_loss 0.733197
Epoch 1/1, Step 4220, dis_loss: 1.35242, gen_loss 0.757186
Epoch 1/1, Step 4240, dis_loss: 1.37226, gen_loss 0.827043
Epoch 1/1, Step 4260, dis_loss: 1.37273, gen_loss 1.03405
Epoch 1/1, Step 4280, dis_loss: 1.37051, gen_loss 0.754721
Epoch 1/1, Step 4300, dis_loss: 1.36512, gen_loss 0.953997
Epoch 1/1, Step 4320, dis_loss: 1.36602, gen_loss 0.839538
Epoch 1/1, Step 4340, dis_loss: 1.38086, gen_loss 0.790702
Epoch 1/1, Step 4360, dis_loss: 1.37659, gen_loss 0.763028
Epoch 1/1, Step 4380, dis_loss: 1.37663, gen_loss 0.793417
Epoch 1/1, Step 4400, dis_loss: 1.37654, gen_loss 0.811273
Epoch 1/1, Step 4420, dis_loss: 1.37599, gen_loss 0.901095
Epoch 1/1, Step 4440, dis_loss: 1.36793, gen_loss 0.811947
Epoch 1/1, Step 4460, dis_loss: 1.36683, gen_loss 0.795889
Epoch 1/1, Step 4480, dis_loss: 1.3756, gen_loss 0.726636
Epoch 1/1, Step 4500, dis_loss: 1.37865, gen_loss 0.808602
Epoch 1/1, Step 4520, dis_loss: 1.37827, gen_loss 0.739471
Epoch 1/1, Step 4540, dis_loss: 1.37139, gen_loss 0.748925
Epoch 1/1, Step 4560, dis_loss: 1.35647, gen_loss 0.791829
Epoch 1/1, Step 4580, dis_loss: 1.37208, gen_loss 0.84307
Epoch 1/1, Step 4600, dis_loss: 1.37482, gen_loss 0.688123
Epoch 1/1, Step 4620, dis_loss: 1.37735, gen_loss 0.774784
Epoch 1/1, Step 4640, dis_loss: 1.3802, gen_loss 0.871349
Epoch 1/1, Step 4660, dis_loss: 1.37585, gen_loss 0.826972
Epoch 1/1, Step 4680, dis_loss: 1.36217, gen_loss 0.827674
Epoch 1/1, Step 4700, dis_loss: 1.36686, gen_loss 0.758943
Epoch 1/1, Step 4720, dis_loss: 1.37933, gen_loss 0.790491
Epoch 1/1, Step 4740, dis_loss: 1.36805, gen_loss 0.908134
Epoch 1/1, Step 4760, dis_loss: 1.37127, gen_loss 0.842013
Epoch 1/1, Step 4780, dis_loss: 1.37531, gen_loss 0.860464
Epoch 1/1, Step 4800, dis_loss: 1.36537, gen_loss 0.83763
Epoch 1/1, Step 4820, dis_loss: 1.37547, gen_loss 0.738048
Epoch 1/1, Step 4840, dis_loss: 1.36575, gen_loss 0.732688
Epoch 1/1, Step 4860, dis_loss: 1.37634, gen_loss 0.786897
Epoch 1/1, Step 4880, dis_loss: 1.37916, gen_loss 0.847849
Epoch 1/1, Step 4900, dis_loss: 1.35221, gen_loss 0.822829
Epoch 1/1, Step 4920, dis_loss: 1.37919, gen_loss 0.776833
Epoch 1/1, Step 4940, dis_loss: 1.36974, gen_loss 0.777616
Epoch 1/1, Step 4960, dis_loss: 1.34387, gen_loss 0.790266
Epoch 1/1, Step 4980, dis_loss: 1.37727, gen_loss 0.818015
Epoch 1/1, Step 5000, dis_loss: 1.37302, gen_loss 0.806539
Epoch 1/1, Step 5020, dis_loss: 1.38154, gen_loss 0.813397
Epoch 1/1, Step 5040, dis_loss: 1.36665, gen_loss 0.883563
Epoch 1/1, Step 5060, dis_loss: 1.36642, gen_loss 0.772851
Epoch 1/1, Step 5080, dis_loss: 1.37264, gen_loss 0.826214
Epoch 1/1, Step 5100, dis_loss: 1.35551, gen_loss 0.800848
Epoch 1/1, Step 5120, dis_loss: 1.37274, gen_loss 0.820624
Epoch 1/1, Step 5140, dis_loss: 1.35974, gen_loss 0.855683
Epoch 1/1, Step 5160, dis_loss: 1.35735, gen_loss 0.815349
Epoch 1/1, Step 5180, dis_loss: 1.37911, gen_loss 0.859033
Epoch 1/1, Step 5200, dis_loss: 1.3842, gen_loss 0.918869
Epoch 1/1, Step 5220, dis_loss: 1.37113, gen_loss 0.712162
Epoch 1/1, Step 5240, dis_loss: 1.37486, gen_loss 0.781417
Epoch 1/1, Step 5260, dis_loss: 1.35968, gen_loss 0.920982
Epoch 1/1, Step 5280, dis_loss: 1.37198, gen_loss 0.806942
Epoch 1/1, Step 5300, dis_loss: 1.366, gen_loss 0.876767
Epoch 1/1, Step 5320, dis_loss: 1.36246, gen_loss 0.845137
Epoch 1/1, Step 5340, dis_loss: 1.3738, gen_loss 0.863006
Epoch 1/1, Step 5360, dis_loss: 1.36804, gen_loss 0.827093
Epoch 1/1, Step 5380, dis_loss: 1.37512, gen_loss 0.780261
Epoch 1/1, Step 5400, dis_loss: 1.37964, gen_loss 0.808325
Epoch 1/1, Step 5420, dis_loss: 1.37413, gen_loss 0.809414
Epoch 1/1, Step 5440, dis_loss: 1.37353, gen_loss 0.900865
Epoch 1/1, Step 5460, dis_loss: 1.35972, gen_loss 0.747259
Epoch 1/1, Step 5480, dis_loss: 1.35476, gen_loss 0.838533
Epoch 1/1, Step 5500, dis_loss: 1.37192, gen_loss 0.866414
Epoch 1/1, Step 5520, dis_loss: 1.36721, gen_loss 0.90606
Epoch 1/1, Step 5540, dis_loss: 1.377, gen_loss 0.779313
Epoch 1/1, Step 5560, dis_loss: 1.38218, gen_loss 0.793565
Epoch 1/1, Step 5580, dis_loss: 1.38018, gen_loss 0.830047
Epoch 1/1, Step 5600, dis_loss: 1.37117, gen_loss 0.761477
Epoch 1/1, Step 5620, dis_loss: 1.37966, gen_loss 0.811707
Epoch 1/1, Step 5640, dis_loss: 1.37302, gen_loss 0.812735
Epoch 1/1, Step 5660, dis_loss: 1.37445, gen_loss 0.827309
Epoch 1/1, Step 5680, dis_loss: 1.36992, gen_loss 0.832357
Epoch 1/1, Step 5700, dis_loss: 1.37323, gen_loss 0.873659
Epoch 1/1, Step 5720, dis_loss: 1.37313, gen_loss 0.816486
Epoch 1/1, Step 5740, dis_loss: 1.3736, gen_loss 0.8076
Epoch 1/1, Step 5760, dis_loss: 1.35448, gen_loss 0.959452
Epoch 1/1, Step 5780, dis_loss: 1.368, gen_loss 0.895405
Epoch 1/1, Step 5800, dis_loss: 1.35801, gen_loss 0.958067
Epoch 1/1, Step 5820, dis_loss: 1.36196, gen_loss 0.810629
Epoch 1/1, Step 5840, dis_loss: 1.37785, gen_loss 0.84454
Epoch 1/1, Step 5860, dis_loss: 1.37108, gen_loss 0.766794
Epoch 1/1, Step 5880, dis_loss: 1.36339, gen_loss 0.812348
Epoch 1/1, Step 5900, dis_loss: 1.36008, gen_loss 0.733862
Epoch 1/1, Step 5920, dis_loss: 1.38188, gen_loss 0.804419
Epoch 1/1, Step 5940, dis_loss: 1.37782, gen_loss 0.75887
Epoch 1/1, Step 5960, dis_loss: 1.37403, gen_loss 0.769447
Epoch 1/1, Step 5980, dis_loss: 1.37379, gen_loss 0.779953
Epoch 1/1, Step 6000, dis_loss: 1.36206, gen_loss 0.778569
Epoch 1/1, Step 6020, dis_loss: 1.3758, gen_loss 0.804181
Epoch 1/1, Step 6040, dis_loss: 1.37603, gen_loss 0.869677
Epoch 1/1, Step 6060, dis_loss: 1.36708, gen_loss 0.812058
Epoch 1/1, Step 6080, dis_loss: 1.36631, gen_loss 0.82974
Epoch 1/1, Step 6100, dis_loss: 1.36126, gen_loss 0.742137
Epoch 1/1, Step 6120, dis_loss: 1.38042, gen_loss 0.767312
Epoch 1/1, Step 6140, dis_loss: 1.35481, gen_loss 0.859678
Epoch 1/1, Step 6160, dis_loss: 1.35977, gen_loss 0.827259
Epoch 1/1, Step 6180, dis_loss: 1.37306, gen_loss 0.850596
Epoch 1/1, Step 6200, dis_loss: 1.36563, gen_loss 0.89083
Epoch 1/1, Step 6220, dis_loss: 1.37225, gen_loss 0.832022

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
